
    Applying Fuzzy Logic Techniques in Object-Oriented Software Development

    In the last several years, a considerable number of object-oriented methods have been introduced to create robust, reusable and adaptable software systems [1], [2], [3], [4]. Object-oriented methods define a considerable number of rules, which are generally expressed using two-valued logic. For instance, an entity in a requirement specification is either accepted or rejected as a class. We identify two major problems in the way rules are defined and applied in current object-oriented methods. The first problem, termed the quantization problem, is a natural result of the inability of two-valued logic to express the approximate and inexact nature of a typical software development process. The second problem, termed the contextual bias problem, arises because most methods are not able to model the effects of the context on the validity of the method. To reduce these problems, we propose a new fuzzy logic-based object-oriented software development technique. This technique is not specific to a particular object-oriented method, but can be used to evaluate and enhance current methods. In addition, the application of fuzzy logic-based reasoning opens new perspectives on software development, such as fuzzy artifacts and an accumulative software life-cycle.
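
    The core idea above can be illustrated with a small sketch: instead of a binary accept/reject decision on a candidate class, each methodological rule contributes a membership degree in [0, 1] and the degrees are aggregated. The rule names, weights, and the weighted-average aggregation below are our own illustration, not taken from the paper.

```python
# Hypothetical sketch: grading a candidate class with fuzzy membership
# degrees instead of a binary accept/reject decision.

def fuzzy_class_relevance(scores):
    """Aggregate per-rule membership degrees (0..1) into an overall
    relevance degree using a weighted average, one common fuzzy
    aggregation operator. Rule names and weights are illustrative."""
    weights = {"is_noun": 0.3, "has_attributes": 0.4, "has_behavior": 0.3}
    return sum(weights[rule] * scores[rule] for rule in weights)

# The entity is no longer simply "accepted" or "rejected": it receives
# a degree of relevance that later development steps can refine.
candidate = {"is_noun": 1.0, "has_attributes": 0.6, "has_behavior": 0.2}
degree = fuzzy_class_relevance(candidate)
print(degree)  # 0.6
```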

    An intelligent system for electrical energy management in buildings

    Recent studies have highlighted that a significant part of the electrical energy consumption in residential and business buildings is due to improper use of electrical appliances. In this context, an automated power management system - capable of reducing energy waste while preserving the perceived comfort level - would be extremely appealing. To this aim, we propose GreenBuilding, a sensor-based intelligent system that monitors energy consumption and automatically controls the behavior of appliances used in a building. GreenBuilding has been implemented as a prototype and evaluated in a real household scenario. The analysis of the experimental results highlights that GreenBuilding is able to provide significant energy savings.
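
    As a minimal sketch of the kind of monitor-and-control loop such a system performs (not the GreenBuilding implementation itself), an appliance can be switched to standby when its measured power draw stays below an idle threshold for several consecutive samples. The threshold values below are illustrative.

```python
# Minimal sketch of sensor-based appliance control: turn an appliance
# off when the last few power readings all indicate idle operation.

def control(readings_w, idle_threshold_w=5.0, idle_samples=3):
    """Return 'off' if the last `idle_samples` readings (in watts) are
    all below the idle threshold, else 'on'."""
    recent = readings_w[-idle_samples:]
    if len(recent) == idle_samples and all(r < idle_threshold_w for r in recent):
        return "off"
    return "on"

print(control([120.0, 3.1, 2.8, 2.5]))  # off: three idle samples in a row
print(control([120.0, 115.0, 3.0]))     # on: appliance recently active
```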

    Detection of traffic congestion and incidents from GPS trace analysis

    This paper presents an expert system for detecting traffic congestion and incidents from real-time GPS data collected from GPS trackers or drivers’ smartphones. First, GPS traces are pre-processed and placed on the road map. Then, the system assigns to each road segment of the map a traffic state based on the speeds of the vehicles. Finally, it sends to the users traffic alerts based on a spatiotemporal analysis of the classified segments. Each traffic alert contains the affected area, a traffic state (e.g., incident, slowed traffic, blocked traffic), and the estimated velocity of vehicles in the area. The proposed system is intended to be a valuable support tool in traffic management for municipalities and citizens. The information produced by the system can be successfully employed to adopt actions for improving city mobility, e.g., regulating vehicular traffic, or can be exploited by the users, who may spontaneously decide to modify their path in order to avoid a traffic jam. The elaboration performed by the expert system is independent of the context (urban or non-urban) and may be directly employed in several city road networks with almost no change to the system parameters, and without the need for a learning process or historical data. The experimental analysis was performed using a combination of simulated GPS data and real GPS data from the city of Pisa. The results on incidents show a detection rate of 91.6% and an average detection time lower than 7 min. Regarding congestion, we show how the system is able to recognize different levels of congestion depending on different road uses.
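
    The per-segment classification step described above can be sketched as a simple mapping from vehicle speeds to a traffic state. The speed thresholds below are hypothetical placeholders, not the system's actual parameters.

```python
# Illustrative sketch: assign a traffic state to a road segment from
# the average speed (km/h) of the vehicles currently on it.

def classify_segment(speeds_kmh):
    """Map the average vehicle speed on a segment to a traffic state.
    Thresholds are illustrative, not the paper's tuned values."""
    if not speeds_kmh:
        return "unknown"
    avg = sum(speeds_kmh) / len(speeds_kmh)
    if avg < 5:
        return "blocked traffic"
    if avg < 20:
        return "slowed traffic"
    return "free flow"

print(classify_segment([2, 3, 1]))     # blocked traffic
print(classify_segment([12, 18, 15]))  # slowed traffic
```

A spatiotemporal analysis over adjacent classified segments would then aggregate these states into area-level alerts.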

    Enabling Compression in Tiny Wireless Sensor Nodes

    A Wireless Sensor Network (WSN) is a network composed of sensor nodes communicating among themselves and deployed in large scale (from tens to thousands) for applications such as environmental, habitat and structural monitoring, disaster management, equipment diagnostics, alarm detection, and target classification. In WSNs, typically, sensor nodes are randomly distributed over the area under observation with very high density. Each node is a small device able to collect information from the surrounding environment through one or more sensors, to process this information locally, and to communicate it to a data collection centre called sink or base station. WSNs are currently an active research area mainly due to the potential of their applications. However, the deployment of a large-scale WSN still requires solutions to a number of technical challenges that stem primarily from the features of the sensor nodes, such as limited computational power, reduced communication bandwidth and small storage capacity. Further, since sensor nodes are typically powered by batteries with a limited capacity, energy is a primary constraint in the design and deployment of WSNs. Datasheets of commercial sensor nodes show that data communication is very expensive in terms of energy consumption, whereas data processing consumes significantly less: the energy cost of receiving or transmitting a single bit of information is approximately the same as that required by the processing unit for executing a thousand operations. On the other hand, the energy consumption of the sensing unit depends on the specific sensor type. In several cases, however, it is negligible with respect to the energy consumed by the communication unit and sometimes also by the processing unit. Thus, to extend the lifetime of a WSN, most of the energy conservation schemes proposed in the literature aim to minimize the energy consumption of the communication unit (Croce et al., 2008). 
To achieve this objective, two main approaches have been followed: power saving through duty cycling, and in-network processing. Duty cycling schemes define coordinated sleep/wakeup schedules among nodes in the network. A detailed description of these techniques applied to WSNs can be found in (Anastasi et al., 2009). On the other hand, in-network processing consists of reducing the amount of information to be transmitted by means of aggregation (Boulis et al., 2003; Croce et al., 2008; Di Bacco et al., 2004; Fan et al., 2007).
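
    The energy trade-off stated above (one transmitted bit costs roughly as much as a thousand CPU operations) can be turned into a back-of-the-envelope test for whether on-node compression pays off. The function and the numeric example below are our own illustration in arbitrary "bit-equivalent" energy units, not measured values.

```python
# Sketch of the radio-vs-CPU energy trade-off: compression is worthwhile
# when the bits saved on the radio outweigh the operations spent by the CPU.

OPS_PER_BIT = 1000  # operations whose energy ~ one transmitted bit

def compression_pays_off(raw_bits, compressed_bits, ops_used):
    """True if the radio energy saved exceeds the CPU energy spent,
    measured in 'bit-equivalent' energy units."""
    saved_bit_equiv = raw_bits - compressed_bits
    spent_bit_equiv = ops_used / OPS_PER_BIT
    return saved_bit_equiv > spent_bit_equiv

# Halving a 1 kB sample using 200k operations saves 4000 bit-equivalents
# of radio energy at a CPU cost of only 200, so it clearly pays off.
print(compression_pays_off(8000, 4000, 200_000))  # True
```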

    Reducing Quantization Error and Contextual Bias problems in Software Development Processes by Applying Fuzzy Logic

    Object-oriented methods define a considerable number of rules, which are generally expressed using two-valued logic. For example, an entity in a requirement specification is either accepted or rejected as a class. There are two major problems in how rules are defined and applied in current methods. Firstly, two-valued logic cannot effectively express the approximate and inexact nature of a typical software development process. Secondly, the influence of contextual factors on rules is generally not modeled explicitly. This paper terms these the quantization error and contextual bias problems, respectively. To reduce these problems, we adopt fuzzy logic-based methodological rules. This approach is method-independent and is useful for evaluating and enhancing current methods. In addition, the use of fuzzy logic increases the adaptability and reusability of design models.

    Designing Software Architectures As a Composition of Specializations of Knowledge Domains

    This paper summarizes our experimental research and software development activities in designing robust, adaptable and reusable software architectures. Several years ago, based on our previous experiences in object-oriented software development, we made the following assumption: ‘A software architecture should be a composition of specializations of knowledge domains’. To verify this assumption, we carried out three pilot projects. In addition to applying some popular domain analysis techniques such as use cases, we identified the invariant compositional structures of the software architectures and the related knowledge domains. Knowledge domains define the boundaries of the adaptability and reusability capabilities of software systems. Next, knowledge domains were mapped to object-oriented concepts. We found that some aspects of knowledge could not be directly modeled in terms of object-oriented concepts. In this paper, we describe our approach, the pilot projects, the problems experienced, and the solutions adopted for realizing the software architectures. We conclude the paper with the lessons learned from this experience.

    A MapReduce solution for associative classification of big data

    Associative classifiers have proven to be very effective in classification problems. Unfortunately, the algorithms used for learning these classifiers are not able to adequately manage big data because of time complexity and memory constraints. To overcome such drawbacks, we propose a distributed association rule-based classification scheme shaped according to the MapReduce programming model. The scheme mines classification association rules (CARs) using a properly enhanced, distributed version of the well-known FP-Growth algorithm. Once CARs have been mined, the proposed scheme performs a distributed rule pruning. The set of surviving CARs is used to classify unlabeled patterns. The memory usage and time complexity of each phase of the learning process are discussed, and the scheme is evaluated on seven real-world big datasets on the Hadoop framework, characterizing its scalability and achievable speedup on small computer clusters. The proposed solution for associative classifiers turns out to be suitable for practically addressing big datasets even with modest hardware support. Comparisons with two state-of-the-art distributed learning algorithms are also discussed in terms of accuracy, model complexity, and computation time.
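
    The counting step underlying CAR mining maps naturally onto MapReduce: each labelled transaction emits (itemset, class) candidates, and a reduce phase sums their supports. The single-process sketch below uses our own function and variable names and plain itemset enumeration; the actual scheme runs a distributed FP-Growth on Hadoop.

```python
# Simplified, non-distributed sketch of support counting for
# classification association rules (CARs) in MapReduce style.

from collections import Counter
from itertools import combinations

def map_phase(transaction, label, max_len=2):
    """Emit ((itemset, class), 1) pairs for each candidate rule
    antecedent of the labelled transaction."""
    for k in range(1, max_len + 1):
        for itemset in combinations(sorted(transaction), k):
            yield (itemset, label), 1

def reduce_phase(mapped):
    """Sum the counts per (itemset, class) key."""
    support = Counter()
    for key, count in mapped:
        support[key] += count
    return support

data = [({"a", "b"}, "pos"), ({"a", "c"}, "pos"), ({"b", "c"}, "neg")]
pairs = [kv for items, label in data for kv in map_phase(items, label)]
support = reduce_phase(pairs)
print(support[(("a",), "pos")])  # 2: itemset {a} occurs in two 'pos' rows
```

Rule pruning would then keep only CARs whose support and confidence exceed chosen thresholds.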

    Solving the Environmental Economic Dispatch Problem with Prohibited Operating Zones in Microgrids using NSGA-II and TOPSIS

    This paper presents a multi-objective optimization framework for the environmental economic dispatch problem in microgrids. Besides classic constraints, prohibited operating zones and ramp-rate limits of the generators are also considered. Pareto-optimal solutions are generated through the NSGA-II algorithm with customized constraint handling. The optimal solution is selected with TOPSIS. Simulations carried out on a prototype microgrid showed the effectiveness of the proposed framework in handling scenarios with Pareto fronts having up to four discontinuities.
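
    The TOPSIS decision step used to pick one solution from the Pareto front can be sketched as follows: normalize the criteria matrix, locate the ideal and anti-ideal points, and rank alternatives by relative closeness to the ideal. The weights and the toy two-alternative matrix below are illustrative, not the paper's dispatch data.

```python
# Sketch of TOPSIS ranking: score alternatives by closeness to the
# ideal point across several (possibly conflicting) criteria.

import math

def topsis(matrix, weights, benefit):
    """matrix: rows = alternatives, columns = criteria.
    benefit[j] is True if criterion j should be maximized."""
    ncols = len(weights)
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.dist(row, best)
        d_worst = math.dist(row, worst)
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Two objectives to minimize (e.g. cost, emissions); alternative 0
# dominates alternative 1, so it gets the higher closeness score.
scores = topsis([[10.0, 5.0], [20.0, 9.0]], [0.5, 0.5], [False, False])
print(scores.index(max(scores)))  # 0
```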

    An overview of recent distributed algorithms for learning fuzzy models in Big Data classification

    Nowadays, a huge amount of data is generated, often in very short time intervals and in various formats, by a number of heterogeneous sources such as social networks and media, mobile devices, internet transactions, networked devices and sensors. These data, identified as Big Data in the literature, are characterized by the popular Vs features: Value, Veracity, Variety, Velocity and Volume. In particular, Value focuses on the useful knowledge that may be mined from data. Thus, in recent years, a number of data mining and machine learning algorithms have been proposed to extract knowledge from Big Data. These algorithms have generally been implemented using ad-hoc programming paradigms, such as MapReduce, on specific distributed computing frameworks, such as Apache Hadoop and Apache Spark. In the context of Big Data, fuzzy models currently play a significant role, thanks to their capability of handling vague and imprecise data and their innate interpretability. In this work, we give an overview of the most recent distributed learning algorithms for generating fuzzy classification models for Big Data. In particular, we first show some design and implementation details of these learning algorithms. Thereafter, we compare them in terms of accuracy and interpretability. Finally, we discuss their scalability.

    On Distributed Fuzzy Decision Trees for Big Data

    Fuzzy decision trees (FDTs) have been shown to be an effective solution in the framework of fuzzy classification. However, the approaches to FDT learning proposed so far have generally neglected time and space requirements. In this paper, we propose a distributed FDT learning scheme, shaped according to the MapReduce programming model, for generating both binary and multiway FDTs from big data. The scheme relies on a novel distributed fuzzy discretizer that generates a strong fuzzy partition for each continuous attribute based on fuzzy information entropy. The fuzzy partitions are then used as input to the FDT learning algorithm, which employs fuzzy information gain for selecting the attributes at the decision nodes. We have implemented the FDT learning scheme on the Apache Spark framework. We have used ten real-world publicly available big datasets to evaluate the behavior of the scheme along three dimensions: 1) performance in terms of classification accuracy, model complexity, and execution time; 2) scalability when varying the number of computing units; and 3) ability to efficiently accommodate an increasing dataset size. We have demonstrated that the proposed scheme is suitable for managing big datasets even with modest commodity hardware support. Finally, we have used the distributed decision tree learning algorithm implemented in the MLlib library and the Chi-FRBCS-BigData algorithm, a MapReduce distributed fuzzy rule-based classification system, for comparative analysis.
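
    The fuzzy information entropy driving attribute selection can be sketched in a few lines: class frequencies at a node are computed from membership degrees (fuzzy cardinalities) rather than crisp counts. The notation below is our own simplified illustration, not the paper's exact formulation.

```python
# Illustrative sketch of fuzzy information entropy: entropy of the class
# distribution where each example contributes its membership degree.

import math

def fuzzy_entropy(memberships, labels):
    """memberships[i]: degree (0..1) with which example i belongs to the
    node; labels[i]: its class. Returns entropy in bits of the fuzzy
    class distribution."""
    total = sum(memberships)
    if total == 0:
        return 0.0
    card = {}
    for mu, y in zip(memberships, labels):
        card[y] = card.get(y, 0.0) + mu
    return -sum((c / total) * math.log2(c / total)
                for c in card.values() if c > 0)

# Membership evenly split between two classes -> maximum entropy, 1 bit.
print(fuzzy_entropy([0.5, 0.5], ["a", "b"]))  # 1.0
```

Fuzzy information gain at a node is then the drop in this entropy when the examples are redistributed over the child fuzzy sets of a candidate attribute.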